Pre-trained models have achieved remarkable success in natural language processing (NLP). However, existing pre-training methods underutilize the benefits of language understanding for generation. Inspired by Generative Adversarial Networks (GANs), we propose a GAN-style model for encoder-decoder pre-training that introduces an auxiliary discriminator, unifying language understanding and generation in a single model. Our model, named GanLM, is trained with two pre-training objectives: replaced token detection and replaced token denoising. Specifically, given masked source sentences, the generator outputs the target distribution, and the discriminator predicts whether tokens sampled from this distribution are incorrect. The target sentence is then corrupted with the misclassified tokens to construct a noisy previous context, which is used to generate the gold sentence. Together, both tasks improve language understanding and generation by selectively using the denoising data. Extensive experiments on language generation benchmarks show that GanLM, with its powerful language understanding capability, outperforms various strong pre-trained language models (PLMs) and achieves state-of-the-art performance.
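The sampling-and-replacement step described above can be sketched without any ML framework. Everything here is illustrative: `toy_generator` and `toy_discriminator` are hypothetical stand-ins for GanLM's trained networks, and the real model operates over distributions and gradients rather than single sampled strings.

```python
import random

random.seed(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]

def toy_generator(context, position):
    # Hypothetical stand-in for the generator: a real model would output
    # a distribution over the vocabulary for the masked position.
    return random.choice(VOCAB)

def toy_discriminator(token, gold):
    # Hypothetical stand-in for the discriminator: simulate an imperfect
    # detector that judges the sampled token correctly 70% of the time.
    is_replacement = (token != gold)
    return is_replacement if random.random() < 0.7 else not is_replacement

def build_noisy_context(target, mask_positions):
    """Construct (a) gold labels for replaced token detection and
    (b) a noisy previous context for replaced token denoising: sampled
    tokens the discriminator fails to flag are kept as replacements."""
    noisy, detect_labels = list(target), []
    for i in mask_positions:
        sampled = toy_generator(noisy, i)
        detect_labels.append(sampled != target[i])  # gold detection label
        flagged = toy_discriminator(sampled, target[i])
        if sampled != target[i] and not flagged:    # misclassified token
            noisy[i] = sampled                      # -> noisy context
    return noisy, detect_labels

target = ["the", "cat", "sat", "on", "the", "mat"]
noisy, labels = build_noisy_context(target, mask_positions=[1, 3, 5])
print(noisy, labels)
```

The denoising objective would then train the decoder to produce `target` from `noisy`, so supervision flows through exactly the tokens the discriminator got wrong.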
The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications. Previous studies along this line primarily focused on perturbations in the natural language question side, neglecting the variability of tables. Motivated by this, we propose the Adversarial Table Perturbation (ATP) as a new attacking paradigm to measure the robustness of Text-to-SQL models. Following this proposition, we curate ADVETA, the first robustness evaluation benchmark featuring natural and realistic ATPs. All tested state-of-the-art models experience dramatic performance drops on ADVETA, revealing models' vulnerability in real-world practices. To defend against ATP, we build a systematic adversarial training example generation framework tailored for better contextualization of tabular data. Experiments show that our approach not only brings the best robustness improvement against table-side perturbations but also substantially empowers models against NL-side perturbations. We release our benchmark and code at: https://github.com/microsoft/ContextualSP.
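A minimal sketch of the table-side attack surface, under toy assumptions: `COLUMN_SWAPS` is a hand-written stand-in for the mined, semantics-preserving replacements in ADVETA, and `toy_parser` is a deliberately naive string-matching parser rather than a real Text-to-SQL model.

```python
# Hand-written stand-in for mined adversarial column renamings.
COLUMN_SWAPS = {"name": "title", "salary": "pay", "age": "years"}

def perturb_table(columns):
    """Return an adversarially renamed copy of the table schema."""
    return [COLUMN_SWAPS.get(c, c) for c in columns]

def toy_parser(question, columns):
    # Naive parser: pick the first schema column mentioned verbatim
    # in the question, falling back to the first column.
    for col in columns:
        if col in question.lower():
            return f"SELECT {col} FROM t"
    return f"SELECT {columns[0]} FROM t"

def is_robust(parser, question, columns):
    """Robust means: the predicted SQL is unchanged up to the renaming."""
    original = parser(question, columns)
    perturbed = parser(question, perturb_table(columns))
    inverse = {v: k for k, v in COLUMN_SWAPS.items()}
    restored = " ".join(inverse.get(tok, tok) for tok in perturbed.split())
    return restored == original

print(is_robust(toy_parser, "what is the salary of each employee",
                ["name", "salary"]))  # -> False: the lexical match breaks
```

The failure mode mirrors the benchmark's finding: a parser keyed to surface forms collapses as soon as a column is renamed to a synonym.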
The task of text-to-SQL is to convert a natural language question to its corresponding SQL query in the context of relational tables. Existing text-to-SQL parsers generate a "plausible" SQL query for an arbitrary user question, thereby failing to correctly handle problematic user questions. To formalize this problem, we conduct a preliminary study on the observed ambiguous and unanswerable cases in text-to-SQL and summarize them into 6 feature categories. Correspondingly, we identify the causes behind each category and propose requirements for handling ambiguous and unanswerable questions. Following this study, we propose a simple yet effective counterfactual example generation approach for the automatic generation of ambiguous and unanswerable text-to-SQL examples. Furthermore, we propose a weakly supervised model DTE (Detecting-Then-Explaining) for error detection, localization, and explanation. Experimental results show that our model achieves the best result on both real-world examples and generated examples compared with various baselines. We will release data and code for future research.
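One way to picture the counterfactual generation idea, heavily simplified: start from an answerable example and drop the column the question depends on, yielding an unanswerable counterfactual whose error location is known by construction. The helper name and record layout below are illustrative, not the paper's format.

```python
def make_unanswerable(question, columns, needed_column):
    """Counterfactual sketch: remove the column the question needs,
    turning an answerable text-to-SQL example into an unanswerable one
    with a known culprit column for the localization task."""
    assert needed_column in columns
    reduced = [c for c in columns if c != needed_column]
    return {"question": question,
            "columns": reduced,
            "label": "unanswerable",
            "culprit": needed_column}

ex = make_unanswerable("list the salary of each employee",
                       ["name", "salary", "dept"], "salary")
print(ex["label"], ex["columns"])
```

A detection model like DTE could then be weakly supervised with such pairs, since the generation procedure itself provides the error label and its location.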
Multimodal Machine Translation (MMT) focuses on enhancing text-only translation with visual features, and has attracted considerable attention from both the natural language processing and computer vision communities. Recent advances still train a separate model for each language pair, which is costly and unaffordable as the number of languages grows in the real world. In other words, the multilingual multimodal machine translation (Multilingual MMT) task, which aims to handle these issues by providing a shared semantic space for multiple languages, has not been investigated. Besides, the image modality has no language boundaries, which makes it well suited to bridging the semantic gap between languages. To this end, we first propose the Multilingual MMT task and establish two new Multilingual MMT benchmark datasets covering seven languages. Then, we propose an effective baseline, LVP-M3, which uses visual prompts to support translation between different languages and consists of three stages (token encoding, language-aware visual prompt generation, and language translation). Extensive experimental results on our constructed benchmark datasets demonstrate the effectiveness of the LVP-M3 method for Multilingual MMT.
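The "language-aware visual prompt generation" stage can be pictured as projecting a shared visual feature through a per-target-language transform. The function and the tiny projection matrices below are hypothetical; LVP-M3's actual prompt generator is a learned network, not fixed matrices.

```python
def language_aware_prompt(visual_feature, lang, lang_proj):
    """Project a shared visual feature with a language-specific matrix
    to obtain a language-aware visual prompt (toy, list-based math)."""
    return [sum(w * x for w, x in zip(row, visual_feature))
            for row in lang_proj[lang]]

# Hypothetical 2x2 projections for two target languages.
proj = {"de": [[1.0, 0.0], [0.0, 2.0]],
        "fr": [[0.5, 0.5], [0.5, 0.5]]}
feat = [2.0, 3.0]
print(language_aware_prompt(feat, "de", proj))  # -> [2.0, 6.0]
print(language_aware_prompt(feat, "fr", proj))  # -> [2.5, 2.5]
```

The point of the sketch: the same language-agnostic image feature yields a different prompt per target language, which is what lets one model serve many translation directions.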
Disruption prediction has made rapid progress in recent years, especially with machine learning (ML) approaches. Understanding why a predictor makes a certain prediction is as crucial as the prediction accuracy for future tokamak disruption predictors. Most disruption predictors aim at accuracy or cross-machine capability. However, if a disruption prediction model can be interpreted, it can explain why certain samples are classified as disruption precursors. This allows us to identify the type of incoming disruption and gives insight into disruption mechanisms. This paper designs a disruption predictor called Interpretable Disruption Predictor based on Physics-Guided Feature Extraction (IDP-PGFE) on J-TEXT. The prediction performance of the model is effectively improved by extracting physics-guided features. A high-performance model is required to ensure the validity of the interpretation results. The interpretability study of IDP-PGFE provides an understanding of J-TEXT disruptions and is generally consistent with the existing understanding of disruption. IDP-PGFE has been applied to disruptions caused by continuously increasing density towards the density limit in experiments on J-TEXT. The time evolution of the PGFE feature contributions indicates that the application of ECRH triggers radiation-caused disruptions and lowers the density at disruption, while the application of RMP indeed raises the density limit on J-TEXT. The interpretability study suggests a physical mechanism of density-limit disruption under RMP: RMP affects not only the MHD instabilities but also the radiation profile, thereby delaying the density-limit disruption.
Most current multimodal summarization methods follow a cascaded manner, in which an off-the-shelf object detector is first used to extract visual features, and then these features are fused with language representations to generate the summary with an encoder-decoder model. The cascaded manner cannot capture the semantic alignment between images and paragraphs, which is crucial for accurate summarization. In this paper, we propose ViL-Sum to jointly model paragraph-level Vision-Language semantic alignment and Multimodal Summarization. The core of ViL-Sum is a joint multimodal encoder with two well-designed tasks: image reordering and image selection. The joint multimodal encoder captures the interaction between modalities, the reordering task guides the model to learn paragraph-level semantic alignment, and the selection task guides the model to select summary-related images for the final summary. Experimental results show that our proposed ViL-Sum significantly outperforms current state-of-the-art methods. In further analysis, we find that the two well-designed tasks and the joint multimodal encoder can effectively guide the model to learn reasonable paragraph-image and summary-image relations.
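The image-reordering pretext task reduces to simple data construction: shuffle the paragraph's images and supervise the model to recover each one's original position. This is a framework-free sketch; the string identifiers are placeholders for real image features.

```python
import random

def make_reorder_example(image_ids, seed=0):
    """Shuffle images and return (shuffled, labels), where labels[i]
    is the original position of shuffled[i]."""
    rng = random.Random(seed)
    positions = list(range(len(image_ids)))
    rng.shuffle(positions)
    return [image_ids[p] for p in positions], positions

shuffled, labels = make_reorder_example(["img0", "img1", "img2", "img3"])

# Sanity check: applying the labels restores the original order.
restored = [None] * len(shuffled)
for i, p in enumerate(labels):
    restored[p] = shuffled[i]
print(restored)  # -> ['img0', 'img1', 'img2', 'img3']
```

A model trained on such examples must relate image content to paragraph order, which is how the pretext task encourages paragraph-level alignment.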
Fully supervised log anomaly detection methods require a large amount of labeled data to achieve promising performance. Thus, how to alleviate the heavy burden of annotating massive unlabeled log data has received much attention. Recently, many semi-supervised log anomaly detection methods have been proposed to reduce annotation costs with the help of templates parsed from labeled normal data. However, these methods usually consider each keyword independently, which disregards the correlation between keywords within log events and the contextual relationships among log sequences. In this paper, we propose a novel weakly supervised log anomaly detection framework, named LogLG, to explore the semantic connections between keywords in sequences. Specifically, we design an iterative process, where in each iteration we first extract keywords from unlabeled logs to construct a log-event graph. Then, we build a subgraph annotator to generate pseudo labels for unlabeled log sequences by annotating the corresponding log subgraphs. To improve annotation quality, we adopt a self-supervised task to pre-train the subgraph annotator. After that, a log anomaly detection model is trained with the pseudo labels generated by the subgraph annotator. Conditioned on the classification results, we re-extract keywords from the classified log sequences and update the log-event graph for the next iteration. Experiments on five benchmarks validate the effectiveness of LogLG for detecting anomalies in unlabeled log data and demonstrate that LogLG, as a state-of-the-art weakly supervised method, achieves significant improvements compared with existing semi-supervised methods.
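The per-iteration log-event graph can be sketched as a keyword co-occurrence graph: nodes are keywords and an edge counts how often two keywords appear in the same log event. This ignores LogLG's learned components (the subgraph annotator and its self-supervised pre-training) and, for illustration only, treats whitespace-split tokens as keywords.

```python
from collections import defaultdict
from itertools import combinations

def build_log_event_graph(log_sequences):
    """Build a keyword co-occurrence graph from unlabeled log sequences.
    Edge weight = number of log events containing both keywords."""
    edges, nodes = defaultdict(int), set()
    for sequence in log_sequences:
        for event in sequence:
            keywords = sorted(set(event.split()))  # toy keyword extraction
            nodes.update(keywords)
            for a, b in combinations(keywords, 2):
                edges[(a, b)] += 1
    return nodes, dict(edges)

logs = [["disk error read", "disk ok"],
        ["net error timeout"]]
nodes, edges = build_log_event_graph(logs)
print(edges[("disk", "error")])  # -> 1
```

The subgraph induced by one log sequence's keywords is what the annotator would label to produce that sequence's pseudo label.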
Unsupervised summarization methods have achieved remarkable results by incorporating representations from pre-trained language models. However, existing methods fail to consider efficiency and effectiveness at the same time when the input document is extremely long. To tackle this problem, in this paper, we propose an efficient Coarse-to-Fine Facet-Aware Ranking (C2F-FAR) framework for unsupervised long document summarization, which is based on semantic blocks. Semantic blocks refer to continuous sentences in a document that describe the same facet. Specifically, we address this problem by converting the one-step ranking method into a hierarchical multi-granularity two-stage ranking. In the coarse-level stage, we propose a new segmentation algorithm to split the document into facet-aware semantic blocks and then filter insignificant blocks. In the fine-level stage, we select salient sentences in each block and then extract the final summary from the selected sentences. We evaluate our framework on four long document summarization datasets: Gov-Report, BillSum, arXiv, and PubMed. Our C2F-FAR achieves new state-of-the-art unsupervised summarization results on Gov-Report and BillSum. In addition, our method is 4-28 times faster than previous methods.
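The two-stage idea can be sketched with a toy similarity measure: segment wherever adjacent-sentence similarity drops (coarse), then rank sentences inside each block (fine). Jaccard word overlap stands in for the paper's pre-trained representations, and the threshold value is an arbitrary assumption.

```python
def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(1, len(sa | sb))

def coarse_to_fine(sentences, block_threshold=0.2, per_block=1):
    """Coarse stage: start a new semantic block whenever adjacent-sentence
    similarity falls below the threshold. Fine stage: keep the sentence(s)
    in each block most similar to the whole document."""
    blocks, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if jaccard(prev, cur) >= block_threshold:
            current.append(cur)
        else:
            blocks.append(current)
            current = [cur]
    blocks.append(current)
    summary = []
    for block in blocks:
        ranked = sorted(block,
                        key=lambda s: sum(jaccard(s, t) for t in sentences),
                        reverse=True)
        summary.extend(ranked[:per_block])
    return blocks, summary

sentences = [
    "the model trains fast",
    "the model trains on gpus",
    "weather was sunny today",
]
blocks, summary = coarse_to_fine(sentences)
print(len(blocks), summary)
```

The efficiency gain comes from the same structure as in the paper: fine-grained ranking only runs within (surviving) blocks instead of over all sentence pairs of a very long document.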
The Transformer architecture, stacked with a series of encoder and decoder layers, has achieved significant progress in neural machine translation. However, the vanilla Transformer mainly exploits the top-layer representation, assuming that the lower layers provide trivial or redundant information, and thus ignores potentially valuable bottom-layer features. In this work, we propose the Group-Transformer model (GTrans), which divides the multi-layer representations of both the encoder and decoder into different groups and then fuses these group features to generate target words. To corroborate the effectiveness of the proposed method, extensive experiments and analyses are conducted on three bilingual translation benchmarks and two multilingual translation tasks, covering the IWSLT-14, IWSLT-17, LDC, WMT-14, and OPUS-100 benchmarks. Experimental and analytical results demonstrate that our model outperforms its Transformer counterparts with consistent gains. Furthermore, it can be successfully scaled up to 60 encoder layers and 36 decoder layers.
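Group-wise layer fusion can be illustrated with plain lists: split the stack of per-layer hidden states into contiguous groups, mean-pool within each group, and combine the groups with scalar weights. In GTrans the fusion is learned; the uniform weights here are a simplifying assumption.

```python
def group_and_fuse(layer_states, num_groups, group_weights=None):
    """Split per-layer hidden states (one vector per layer) into
    contiguous groups, mean-pool inside each group, then take a
    weighted sum of the group features."""
    assert len(layer_states) % num_groups == 0
    size = len(layer_states) // num_groups
    groups = [layer_states[i * size:(i + 1) * size]
              for i in range(num_groups)]
    pooled = [[sum(vals) / size for vals in zip(*g)] for g in groups]
    w = group_weights or [1.0 / num_groups] * num_groups
    return [sum(wi * x for wi, x in zip(w, dims)) for dims in zip(*pooled)]

# 4 layers, hidden size 2, fused into 2 groups of 2 layers each.
states = [[1.0, 0.0], [3.0, 0.0], [0.0, 2.0], [0.0, 6.0]]
print(group_and_fuse(states, num_groups=2))  # -> [1.0, 2.0]
```

Because every layer contributes through its group, bottom-layer features reach the output even when the stack is very deep, which is the intuition behind scaling to 60 encoder layers.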
Multilingual neural machine translation (MNMT), trained on multiple language pairs, has attracted considerable attention because it shares knowledge across languages with fewer model parameters and lower training cost. Nonetheless, multilingual training suffers from language interference degradation in the shared parameters due to negative interference among different translation directions, especially on high-resource languages. In this paper, we propose a multilingual translation model with high-resource language-specific training (HLT-MT) to alleviate the negative interference, which adopts two-stage training with a language-specific selection mechanism. Specifically, we first train the multilingual model only with high-resource pairs and select language-specific modules at the top of the decoder to enhance the translation quality of high-resource directions. Next, the model is further trained on all available corpora to transfer knowledge from high-resource languages (HRLs) to low-resource languages (LRLs). Experimental results show that HLT-MT outperforms various strong baselines on the WMT-10 and OPUS-100 benchmarks. Furthermore, analytical experiments validate the effectiveness of our method in alleviating negative interference in multilingual training.
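The two-stage schedule and the language-specific selection at the top of the decoder can be sketched as simple routing logic. The `HIGH_RESOURCE` set and module names are placeholders; in HLT-MT the selection mechanism and the modules themselves are learned components.

```python
HIGH_RESOURCE = {"en", "de", "fr"}  # placeholder set of HRLs

def select_decoder_modules(target_lang, shared_module, specific_modules):
    """Route high-resource target languages through a dedicated top
    decoder module; everything else uses the shared module."""
    if target_lang in HIGH_RESOURCE:
        return specific_modules.get(target_lang, shared_module)
    return shared_module

def two_stage_schedule(pairs):
    """Stage 1: only high-resource pairs; stage 2: all available pairs."""
    stage1 = [(s, t) for s, t in pairs
              if s in HIGH_RESOURCE and t in HIGH_RESOURCE]
    return stage1, list(pairs)

pairs = [("en", "de"), ("en", "sw"), ("fr", "en")]
stage1, stage2 = two_stage_schedule(pairs)
print(stage1)  # -> [('en', 'de'), ('fr', 'en')]
```

Keeping high-resource directions on dedicated modules is what shields them from interference in stage 2, while the shared parameters still carry HRL-to-LRL transfer.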